Add opt-in multithreading #85
Conversation
Interesting! I didn't realize we were this close to supporting this use case. What's your reasoning for making this opt-in (through the `multithreading` option)?
Mostly so the change was non-breaking for existing users, @vweevers. Before I implemented the flag, a few of the unit tests broke, so I assumed there might be users out there somewhere that are also counting on that behavior. The flag can be stripped out, though, or I suppose it could default to `true`.
There is a workflow issue due to a Python issue on macOS. Would you like that fixed, @vweevers?
Fixed in main: 3b1a6f2
You'll need to rebase your branch to get the fix.
Force-pushed from 352a7bc to 1f4f0c0.
Thanks for the quick fix @vweevers. Rebased!!
Yeah. We can do that in a future semver-major version. It would simplify the code a bit, but I think it's good to introduce this feature gradually.
@vweevers I think I got all of your comments addressed. Please let me know if there is anything else that you would like updated.
It might be surprising: with

```js
const location = tempy.directory()
const db1 = new ClassicLevel(location)
await db1.open()
t.is(db1.location, location)
```

one would expect to have exclusive access to this db location. But the code that follows shows the opposite:

```js
const db2 = new ClassicLevel(location)
await db2.open({ multithreading: true })
t.is(db2.location, location)
```

In other words, I think …
Sounds good!! Updated to achieve that behavior, @vweevers. I also added another test case to verify the fix.
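For readers skimming the thread later, the behavior being agreed on above appears to be that every handle on a location must opt in consistently. A rough sketch of what such a check could look like, reusing `location` and the tape-style `t` from the snippet above (the exact error semantics and assertions are assumptions for illustration, not taken from the PR):

```js
const db1 = new ClassicLevel(location)
await db1.open() // no multithreading flag, so this handle expects exclusive access

const db2 = new ClassicLevel(location)
try {
  // Under the updated behavior, this open should fail because db1 did not
  // opt in to multithreading (assumed semantics, for illustration only).
  await db2.open({ multithreading: true })
  t.fail('expected the second open() to be rejected')
} catch (err) {
  t.ok(err, 'second open is rejected when the first handle did not opt in')
} finally {
  await db1.close()
}
```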
Nice work! I'll wait with merging for a few days, to give others a chance to review (@juliangruber @ralphtheninja if you're able 🙏).
Hi there @vweevers! Was curious if @juliangruber or @ralphtheninja were interested in taking a bite at the review apple? I will be happy to respond to any comments or concerns.
Aaah! So basically we have a layer in between db instances and the corresponding location. Looks good!
Thanks @ralphtheninja! Much appreciated!! @juliangruber do you have any comments or concerns? @vweevers was curious before merging this... Just a nudge so we can get this imported into the chainsafe/lodestar repo.
That'll do.
@vweevers not sure why this unit test failed on the release (https://github.com/Level/classic-level/actions/runs/6997605304/job/19034756860) but passed here: UPDATE: looks like there is a race condition. It also failed here:
Could be a race issue; can you check? I'll wait with the npm release. For future reference (when the logs are deleted), the error is:
Yep. Will take a look this evening.
We have a use case for using our LevelDB database from multiple worker threads. The base library did most of the heavy lifting, and this PR allows the bindings to be used across workers.
Methodology
Context-awareness was preserved, and so was thread-safe operation from the Node.js perspective. Only the open and close functionality needed to change, and a single mutex protects the setup. A flag was added so the feature is non-breaking and only provides additional functionality.
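As a concrete picture of what the flag enables, here is a minimal sketch of opening the same location from the main thread and a worker. The location, keys, and worker wiring are made up for illustration; only the `multithreading` open option comes from this PR.

```js
const { Worker, isMainThread } = require('worker_threads')
const { ClassicLevel } = require('classic-level')

const location = './shared-db' // hypothetical path shared by all threads

async function run () {
  const db = new ClassicLevel(location)
  // Opt in on every handle; without the flag, behavior is unchanged.
  await db.open({ multithreading: true })
  await db.put(isMainThread ? 'from-main' : 'from-worker', 'hello')
  await db.close()
}

if (isMainThread) {
  // Spawn one worker that opens the same database location concurrently.
  new Worker(__filename)
}

run().catch(console.error)
```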
The `open_db` function searches for a record in a `db_handles` map, and if one exists the `open_handle_count` is incremented for that database. This is used to allow multiple handles to the same database within one process. The reverse happens when the destructor or the `closed_db` method is called: if the handle is the last to be closed, the db instance is deleted from the map to clean up the native `LOCK` file.
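The bookkeeping described above can be pictured roughly like this. This is an illustrative JavaScript model, not the actual C++ in the binding; `openNativeDb` and `closeNativeDb` are stand-ins for the real native calls, and the mutex that guards this logic is omitted.

```js
const dbHandles = new Map() // location -> { db, openHandleCount }

function openDb (location, openNativeDb) {
  let record = dbHandles.get(location)
  if (!record) {
    // First handle for this location: open the native database once.
    record = { db: openNativeDb(location), openHandleCount: 0 }
    dbHandles.set(location, record)
  }
  record.openHandleCount++
  return record.db
}

function closeDb (location, closeNativeDb) {
  const record = dbHandles.get(location)
  if (!record) return
  if (--record.openHandleCount === 0) {
    // Last handle closed: release the native db so the LOCK file goes away.
    closeNativeDb(record.db)
    dbHandles.delete(location)
  }
}
```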